Results 1 - 17 of 17
1.
Article in English | MEDLINE | ID: mdl-37792655

ABSTRACT

Neurodegenerative disease often affects speech. Speech acoustics can be used as objective clinical markers of pathology. Previous investigations of pathological speech have primarily compared controls with one specific condition and excluded comorbidities. We broaden the utility of speech markers by examining how multiple acoustic features can delineate diseases. We used supervised machine learning with gradient boosting (CatBoost) to delineate healthy speech from speech of people with multiple sclerosis or Friedreich ataxia. Participants performed a diadochokinetic task where they repeated alternating syllables. We subjected 74 spectral and temporal prosodic features from the speech recordings to machine learning. Results showed that Friedreich ataxia, multiple sclerosis and healthy controls were all identified with high accuracy (over 82%). Twenty-one acoustic features were strong markers of neurodegenerative diseases, falling under the categories of spectral qualia, spectral power, and speech rate. We demonstrated that speech markers can delineate neurodegenerative diseases and distinguish healthy speech from pathological speech with high accuracy. Findings emphasize the importance of examining speech outcomes when assessing indicators of neurodegenerative disease. We propose large-scale initiatives to broaden the scope for differentiating other neurological diseases and affective disorders.
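
A minimal sketch of the classification step described above, assuming a feature table with one row per recording, 74 acoustic feature columns, and a group label; the file and column names are hypothetical stand-ins, not the authors' materials.

```python
import pandas as pd
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("ddk_features.csv")       # hypothetical feature table
X = df.drop(columns=["group"])             # 74 spectral/temporal features
y = df["group"]                            # "control", "MS", or "FRDA"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Gradient boosting with CatBoost for the three-way delineation
model = CatBoostClassifier(loss_function="MultiClass", verbose=False)
model.fit(X_train, y_train)

print("accuracy:", model.score(X_test, y_test))
# Feature importances suggest which acoustic markers drive the separation
print(model.get_feature_importance(prettified=True).head(21))
```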


Subjects
Friedreich Ataxia, Multiple Sclerosis, Neurodegenerative Diseases, Humans, Friedreich Ataxia/diagnosis, Friedreich Ataxia/psychology, Speech Acoustics, Multiple Sclerosis/diagnosis, Supervised Machine Learning
2.
Behav Brain Res; 450: 114498, 2023 Jul 26.
Article in English | MEDLINE | ID: mdl-37201892

ABSTRACT

The medial geniculate body (MGB) of the thalamus is an obligatory relay for auditory processing. A breakdown of adaptive filtering and sensory gating at this level may lead to multiple auditory dysfunctions, while high-frequency stimulation (HFS) of the MGB might mitigate aberrant sensory gating. To further investigate the sensory gating functions of the MGB, this study (i) recorded electrophysiological evoked potentials (EPs) in response to continuous auditory stimulation, and (ii) assessed the effect of MGB HFS on these responses in noise-exposed and control animals. Pure-tone sequences were presented to assess differential sensory gating functions associated with stimulus pitch, grouping (pairing), and temporal regularity. EPs were recorded from the MGB before and after HFS (100 Hz). All animals (unexposed and noise-exposed, pre- and post-HFS) showed gating for pitch and grouping. Unexposed animals also showed gating for temporal regularity that was not found in noise-exposed animals. Moreover, following MGB HFS, only noise-exposed animals showed restoration of the typical EP amplitude suppression. The current findings confirm adaptive thalamic sensory gating based on different sound characteristics and provide evidence that temporal regularity affects MGB auditory signaling.
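
The abstract does not give the gating metric; a common index for paired stimuli is the ratio of the averaged evoked-potential amplitude to the second tone over the first, sketched below under that assumption (synthetic data; smaller ratios indicate stronger gating).

```python
import numpy as np

def gating_ratio(epochs_s1, epochs_s2):
    """epochs_s*: (n_trials, n_samples) EP epochs for tones 1 and 2 of a pair."""
    amp1 = np.ptp(epochs_s1.mean(axis=0))   # peak-to-peak of the averaged EP
    amp2 = np.ptp(epochs_s2.mean(axis=0))
    return amp2 / amp1                      # < 1 indicates suppression (gating)

# Synthetic example: the response to the second tone is attenuated
rng = np.random.default_rng(0)
erp = np.sin(np.linspace(0, np.pi, 500))
s1 = rng.normal(0, 1, (100, 500)) + 5 * erp
s2 = rng.normal(0, 1, (100, 500)) + 2 * erp
print(f"gating ratio: {gating_ratio(s1, s2):.2f}")
```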


Subjects
Auditory Cortex, Thalamus, Rats, Animals, Thalamus/physiology, Geniculate Bodies/physiology, Acoustic Stimulation, Sensation, Sensory Gating, Auditory Cortex/physiology
3.
J Voice; 37(6): 969.e23-969.e41, 2023 Nov.
Article in English | MEDLINE | ID: mdl-34272139

ABSTRACT

PURPOSE: The human voice qualitatively changes across the lifespan. Although some of these vocal changes may be pathologic, other changes likely reflect natural physiological aging. Normative data for voice characteristics in healthy aging are limited, and disparate studies have used a range of different acoustic features, some of which are implicated in pathologic voice changes. We examined the perceptual and acoustic features that predict healthy aging. METHOD: Participants (N = 150) aged between 50 and 92 years performed a sustained vowel task. Acoustic features were measured using the Multi-Dimensional Voice Program and the Analysis of Dysphonia in Speech and Voice. We used forward and backward variable elimination techniques based on the Bayesian information criterion and linear regression to assess which of these acoustic features predict age and perceptual features. Hearing thresholds were determined using pure-tone audiometry at 250, 500, 1000, 2000, and 4000 Hz. We further explored potential relationships between these acoustic features and clinical assessments of voice quality using the Consensus Auditory-Perceptual Evaluation of Voice. RESULTS: Chronological age was significantly predicted by greater voice turbulence, variability of cepstral fundamental frequency, low relative to high spectral energy, and cepstral intensity. When controlling for hearing loss, age was significantly predicted by amplitude perturbations and cepstral intensity. Clinical assessments of voice indicated that perceptual characteristics of speech were predicted by different acoustic features. For example, breathiness was predicted by the soft phonation index, mean cepstral peak prominence, mean low-high spectral ratio, and mean cepstral intensity. CONCLUSIONS: Findings suggest that the acoustic features that predict healthy aging differ from those previously reported for the pathologic voice. We propose a model of healthy and pathologic voice development in which voice characteristics are mediated by the inability to monitor vocal productions associated with age-related hearing loss. These normative data on healthy vocal aging may assist in separating voice pathologies from healthy aging.
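
A minimal sketch of backward elimination by the Bayesian information criterion (BIC) with linear regression, in the spirit of the selection procedure described above; the predictor names in the usage note are hypothetical placeholders for the acoustic measures.

```python
import statsmodels.api as sm

def backward_bic(X, y):
    """Drop one predictor at a time while doing so lowers the model BIC."""
    cols = list(X.columns)
    best = sm.OLS(y, sm.add_constant(X[cols])).fit()
    improved = True
    while improved and len(cols) > 1:
        improved = False
        for c in list(cols):
            reduced = [k for k in cols if k != c]
            trial = sm.OLS(y, sm.add_constant(X[reduced])).fit()
            if trial.bic < best.bic:
                best, cols, improved = trial, reduced, True
                break
    return best, cols

# Hypothetical usage with a table of acoustic features and chronological age:
# model, kept = backward_bic(df[["vti", "cpp", "lh_ratio", "shimmer"]], df["age"])
```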


Assuntos
Disfonia , Envelhecimento Saudável , Perda Auditiva , Humanos , Pessoa de Meia-Idade , Idoso , Idoso de 80 Anos ou mais , Estudos Transversais , Teorema de Bayes , Acústica da Fala , Acústica , Disfonia/diagnóstico , Medida da Produção da Fala/métodos
4.
Psychol Music; 50(6): 1721-1739, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36381385

ABSTRACT

Providing natural opportunities that scaffold interpersonal engagement is important for supporting social interactions for young children with autism spectrum disorder (ASD). Musical activities are often motivating, familiar, and predictable, and may support both children and their interaction partners by providing opportunities for shared social engagement. We assessed multiple facets of nonverbal social engagement (child and caregiver visual attention and interpersonal movement coordination) during musical (song) and non-musical (picture) book-sharing contexts in caregiver-child dyads of preschoolers with (n = 13) and without (n = 16) ASD. Overall, children with ASD demonstrated reduced visual attention during the book-sharing activity, as well as reduced movement coordination with their caregivers, compared to children with typical development. Children in both diagnostic groups, as well as caregivers, demonstrated greater visual attention (gaze toward the activity and/or social partner) during song books compared to picture books. Visual attention behavior was correlated between children and caregivers in the ASD group, but only in the song-book condition. Findings highlight the importance of considering how musical contexts impact the behavior of both partners in the interaction. Musical activities may support social engagement by modulating the behavior of both children and caregivers.

5.
J Neurosci; 42(11): 2313-2326, 2022 Mar 16.
Article in English | MEDLINE | ID: mdl-35086905

ABSTRACT

During multisensory speech perception, slow δ oscillations (∼1-3 Hz) in the listener's brain synchronize with the speech signal, likely engaging in speech signal decomposition. Notable fluctuations in the speech amplitude envelope, which convey speaker prosody, temporally align with articulatory and body gestures, and both provide complementary cues that temporally structure speech. Further, δ oscillations in the left motor cortex seem to align with speech and musical beats, suggesting a possible role in the temporal structuring of (quasi-)rhythmic stimulation. We extended the role of δ oscillations to audiovisual asynchrony detection as a test case of the temporal analysis of multisensory prosodic fluctuations in speech. We recorded electroencephalographic (EEG) responses in an audiovisual asynchrony detection task while participants watched videos of a speaker. We filtered the speech signal to remove verbal content and examined how visual and auditory prosodic features temporally (mis-)align. Results confirmed that (1) participants accurately detected audiovisual asynchrony; (2) δ power in the left motor cortex increased in response to audiovisual asynchrony, and the difference in δ power between asynchronous and synchronous conditions predicted behavioral performance; and (3) δ-β coupling in the left motor cortex decreased when listeners could not accurately map visual and auditory prosodies. Finally, both behavioral and neurophysiological effects were altered when the speaker's face was degraded by a visual mask. Together, these findings suggest that motor δ oscillations support asynchrony detection of multisensory prosodic fluctuations in speech.

SIGNIFICANCE STATEMENT: Speech perception is facilitated by regular prosodic fluctuations that temporally structure the auditory signal. Auditory speech processing involves the left motor cortex and associated δ oscillations. However, visual prosody (i.e., a speaker's body movements) complements auditory prosody, and it is unclear how the brain temporally analyses different prosodic features in multisensory speech perception. We combined an audiovisual asynchrony detection task with electroencephalographic (EEG) recordings to investigate how δ oscillations support the temporal analysis of multisensory speech. Results confirmed that asynchrony detection of visual and auditory prosodies leads to increased δ power in the left motor cortex and correlates with performance. We conclude that δ oscillations are invoked to resolve temporal asynchrony in multisensory speech perception.
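
A hedged sketch of the two measure families named in the results (δ band power and δ-β coupling) for a single EEG channel, using Hilbert-transform estimates; this illustrates the class of analysis, not the authors' exact pipeline, and the β range used is an assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def delta_power(x, fs):
    """Mean squared amplitude of the δ-band (1-3 Hz) analytic signal."""
    return np.mean(np.abs(hilbert(bandpass(x, 1, 3, fs))) ** 2)

def delta_beta_coupling(x, fs):
    """Mean-vector-length coupling between δ phase and β amplitude."""
    phase = np.angle(hilbert(bandpass(x, 1, 3, fs)))
    amp = np.abs(hilbert(bandpass(x, 15, 25, fs)))  # β range is an assumption
    return np.abs(np.mean(amp * np.exp(1j * phase))) / amp.mean()
```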


Subjects
Speech Perception, Acoustic Stimulation, Auditory Perception/physiology, Electroencephalography, Humans, Photic Stimulation, Speech, Speech Perception/physiology, Visual Perception/physiology
6.
Cortex; 134: 320-332, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33340879

ABSTRACT

Audio-motor integration is currently viewed as a predictive process in which the brain simulates upcoming sounds based on voluntary actions. This perspective does not consider how our auditory environment may trigger involuntary action in the absence of prediction. We address this issue by examining the relationship between acoustic salience and involuntary motor responses. We investigate how acoustic features in music contribute to the perception of salience, and whether those features trigger involuntary peripheral motor responses. Participants with little-to-no musical training listened to musical excerpts once while remaining still during the recording of their muscle activity with surface electromyography (sEMG), and again while they continuously rated perceived salience within the music using a slider. We show cross-correlations between 1) salience ratings and acoustic features, 2) acoustic features and spontaneous muscle activity, and 3) salience ratings and spontaneous muscle activity. Amplitude, intensity, and spectral centroid were perceived as the most salient features in music, and fluctuations in these features evoked involuntary peripheral muscle responses. Our results suggest an involuntary mechanism for audio-motor integration, which may rely on brainstem-spinal or brainstem-cerebellar-spinal pathways. Based on these results, we argue that a new framework is needed to explain the full range of human sensorimotor capabilities. This goal can be achieved by considering how predictive and reactive audio-motor integration mechanisms could operate independently or interactively to optimize human behavior.
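
A sketch of the cross-correlation logic: extract frame-wise acoustic features from the excerpt (here with librosa) and cross-correlate them against a continuous trace such as an sEMG envelope or salience ratings resampled to the same frame rate. The file name is a hypothetical placeholder.

```python
import librosa
import numpy as np
from scipy.signal import correlate

y, sr = librosa.load("excerpt.wav")                    # hypothetical file
rms = librosa.feature.rms(y=y)[0]                      # amplitude envelope
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

def xcorr(a, b):
    """Normalized cross-correlation of two equal-length traces."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return correlate(a, b, mode="full")

# e.g., lag (in frames) at which amplitude best predicts a salience trace
# resampled to the feature frame rate:
# c = xcorr(rms, salience); best_lag = c.argmax() - (len(salience) - 1)
```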


Subjects
Brain Mapping, Music, Acoustic Stimulation, Acoustics, Auditory Perception, Humans
7.
J Acoust Soc Am; 148(6): 3562, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33379897

ABSTRACT

Wearing face masks (alongside physical distancing) provides some protection against infection from COVID-19. Face masks can also change how people communicate and subsequently affect speech signal quality. This study investigated how three common face mask types (N95, surgical, and cloth) affected acoustic analysis of speech and perceived intelligibility in healthy subjects. Acoustic measures of timing, frequency, perturbation, and power spectral density were measured. Speech intelligibility and word and sentence accuracy were also examined using the Assessment of Intelligibility of Dysarthric Speech. Mask type impacted the power distribution in frequencies above 3 kHz for the N95 mask, and above 5 kHz for surgical and cloth masks. Measures of timing and spectral tilt mainly differed with N95 mask use. Cepstral and harmonics to noise ratios remained unchanged across mask type. No differences were observed across conditions for word or sentence intelligibility measures; however, accuracy of word and sentence translations was affected by all masks. Data presented in this study show that face masks change the speech signal, but some specific acoustic features remain largely unaffected (e.g., measures of voice quality) irrespective of mask type. Outcomes have bearing on how future speech studies are run when personal protective equipment is worn.
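
A hedged sketch of the spectral comparison: Welch power spectral density summarized as power in the bands above ~3-5 kHz, where the masks were found to attenuate the signal; the file names are placeholders.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def band_power(path, lo_hz, hi_hz):
    fs, x = wavfile.read(path)
    x = x.mean(axis=1) if x.ndim > 1 else x   # mix to mono if needed
    f, pxx = welch(x.astype(float), fs=fs, nperseg=4096)
    sel = (f >= lo_hz) & (f <= hi_hz)
    return np.trapz(pxx[sel], f[sel])

ratio = band_power("masked.wav", 3000, 8000) / band_power("unmasked.wav", 3000, 8000)
print(f"high-frequency power, masked / unmasked: {ratio:.2f}")
```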


Assuntos
COVID-19/prevenção & controle , Máscaras/efeitos adversos , Acústica da Fala , Inteligibilidade da Fala , Adulto , Feminino , Humanos , Masculino , SARS-CoV-2 , Qualidade da Voz , Adulto Jovem
8.
J Neurosci Methods; 343: 108830, 2020 Sep 1.
Article in English | MEDLINE | ID: mdl-32603812

ABSTRACT

BACKGROUND: Researchers rely on the specified capabilities of their hardware and software even though, in reality, these capabilities are often not achieved. Considering that the number of experiments examining neural oscillations has increased steadily, easy-to-implement tools for testing the capabilities of hardware and software are necessary. NEW METHOD: We present an open-source MATLAB toolbox, the Schultz Cigarette Burn Toolbox (SCiBuT), which allows users to benchmark the capabilities of their visual display devices and align neural and behavioral responses with veridical timing of visual stimuli. Specifically, the toolbox marks the corners of the display with black or white squares to indicate the timing of the onset of static images and the timing of frame changes within videos. Using basic hardware (i.e., a photodiode, an Arduino microcontroller, and an analogue input box), the light changes in the corner of the screen can be captured and synchronized with EEG recordings and/or behavioral responses. RESULTS: We demonstrate that the SCiBuT is sensitive to framerate inconsistencies and provide examples of hardware setups that are suboptimal for measuring fine timing. Finally, we show that inconsistencies in framerate during video presentation can affect EEG oscillations. CONCLUSIONS: The SCiBuT provides tools to benchmark framerates and frame changes and to synchronize frame changes with neural and behavioral signals. This is the first open-source toolbox that can perform these functions. The SCiBuT can be freely downloaded (www.band-lab.com/scibut) and used during experimental trials to improve the accuracy and precision of timestamps and to ensure videos are presented at the intended framerate.
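
The core idea behind the toolbox can be sketched as follows: a photodiode over the corner square yields a light trace whose black/white flips timestamp frame changes, and threshold crossings recover the veridical onsets. The threshold rule and sampling rate below are assumptions, not the toolbox's implementation.

```python
import numpy as np

def frame_onsets(trace, fs, thresh=None):
    """Onset times (s) of white flashes in a photodiode voltage trace."""
    if thresh is None:
        thresh = (trace.min() + trace.max()) / 2   # midpoint threshold
    above = trace > thresh
    rising = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return rising / fs

# Frame-to-frame intervals then reveal dropped or delayed frames, e.g.:
# intervals = np.diff(frame_onsets(trace, fs=10_000))
# dropped = intervals > 1.5 / nominal_framerate
```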


Subjects
Computers, Software
9.
Proc Biol Sci; 286(1911): 20191116, 2019 Sep 25.
Article in English | MEDLINE | ID: mdl-31551056

ABSTRACT

Most human communication is carried by modulations of the voice. However, a wide range of cultures has developed alternative forms of communication that make use of a whistled sound source. For example, whistling is used as a highly salient signal for capturing attention; it can carry iconic cultural meanings such as the catcall, enact a formal code as in boatswain's calls, or stand as a proxy for speech in whistled languages. We used real-time magnetic resonance imaging to examine the muscular control of whistling and found a strong association between the shape of the tongue and the whistled frequency. This bioacoustic profile parallels the use of the tongue in vowel production, and is consistent with the role of whistled languages as proxies for spoken languages, in which one of the acoustical features of speech sounds is substituted with a frequency-modulated whistle. Furthermore, previous evidence that non-human apes may be capable of learning to whistle from humans suggests that these animals may have sensorimotor abilities similar to those that support speech in humans.


Subjects
Magnetic Resonance Imaging, Singing, Speech, Acoustics, Humans, Tongue
10.
Psychol Res; 83(3): 419-431, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30805705

ABSTRACT

Auditory feedback of actions provides additional information about the timing of one's own actions and those of others. However, little is known about how musicians and nonmusicians integrate auditory feedback from multiple sources to regulate their own timing or to (intentionally or unintentionally) coordinate with a partner. We examined how musical expertise modulates the role of auditory feedback in a two-person synchronization-continuation tapping task. Pairs of individuals were instructed to tap at a rate indicated by an initial metronome cue in all four auditory feedback conditions: no feedback, self-feedback (cannot hear their partner), other feedback (cannot hear themselves), or full feedback (both self and other). Participants within a pair were either both musically trained (musicians), both untrained (nonmusicians), or one musically trained and one untrained (mixed). Results demonstrated that all three pair types spontaneously synchronized with their partner when receiving other or full feedback. Moreover, all pair types were better at maintaining the metronome rate with self-feedback than with no feedback. Musician pairs better maintained the metronome rate when receiving other feedback than when receiving no feedback; in contrast, nonmusician pairs were worse when receiving other or full feedback compared to no feedback. Both members of mixed pairs maintained the metronome rate better in the other and full feedback conditions than in the no feedback condition, similar to musician pairs. Overall, nonmusicians benefited from musicians' expertise without negatively influencing musicians' ability to maintain the tapping rate. One implication is that nonmusicians may improve their beat-keeping abilities by performing tasks with musically skilled individuals.
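
The study's exact dependent measures are not listed in the abstract; rate maintenance in such tasks is commonly summarized by the drift and variability of inter-tap intervals (ITIs) relative to the metronome interval, sketched here on synthetic taps.

```python
import numpy as np

def rate_maintenance(tap_times, target_iti):
    itis = np.diff(tap_times)
    drift = (itis.mean() - target_iti) / target_iti   # signed tempo drift
    cv = itis.std() / itis.mean()                     # tapping variability
    return drift, cv

# Synthetic taps around a 500-ms metronome interval
rng = np.random.default_rng(1)
taps = np.cumsum(0.5 + rng.normal(0, 0.02, 50))
print(rate_maintenance(taps, target_iti=0.5))
```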


Assuntos
Estimulação Acústica , Percepção Auditiva/fisiologia , Retroalimentação Sensorial/fisiologia , Articulações/fisiologia , Destreza Motora/fisiologia , Música , Adolescente , Adulto , Feminino , Humanos , Masculino , Adulto Jovem
11.
Psychol Res; 83(5): 907-923, 2019 Jul.
Article in English | MEDLINE | ID: mdl-28916843

ABSTRACT

Forming temporal expectancies plays a crucial role in our survival as it allows us to identify the occurrence of temporal deviants that might signal potential dangers. The dynamic attending theory suggests that temporal expectancies are formed more readily for rhythms that imply a beat (i.e., metrical rhythms) compared to those that do not (i.e., nonmetrical rhythms). Moreover, metrical frameworks can be used to detect temporal deviants. Although several studies have demonstrated that congenital or early blindness correlates with modality-specific neural changes that reflect compensatory mechanisms, few have examined whether blind individuals show a learning advantage for auditory rhythms and whether learning can occur unintentionally and without awareness, that is, implicitly. We compared blind participants with sighted controls on their ability to implicitly learn metrical and nonmetrical auditory rhythms. We reasoned that the loss of sight in blindness might lead to improved sensitivity to rhythms and predicted that the blind learn rhythms more readily than the sighted. We further hypothesized that metrical rhythms are learned more readily than nonmetrical rhythms. Results partially confirmed our predictions: the blind group learned nonmetrical rhythms more readily than the sighted group, but learned metrical rhythms less readily. Only the sighted group learned metrical rhythms more readily than nonmetrical rhythms. The blind group demonstrated awareness of the nonmetrical rhythms, while learning was implicit for all other conditions. Findings suggest that improved deviant-sensitivity might have provided the blind group a learning advantage for nonmetrical rhythms. Future research could explore the plastic changes that affect deviance-detection and stimulus-specific adaptation in blindness.


Assuntos
Percepção Auditiva , Cegueira/psicologia , Aprendizagem , Estimulação Acústica , Adolescente , Adulto , Idoso , Conscientização , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Adulto Jovem
12.
Behav Res Methods; 51(1): 204-234, 2019 Feb.
Article in English | MEDLINE | ID: mdl-29667081

ABSTRACT

The Musical Instrument Digital Interface (MIDI) was readily adopted for auditory sensorimotor synchronization experiments. These experiments typically use MIDI percussion pads to collect responses, a MIDI-USB converter (or MIDI-PCI interface) to record responses on a PC and manipulate feedback, and an external MIDI sound module to generate auditory feedback. Previous studies have suggested that auditory feedback latencies can be introduced by these devices. The Schultz MIDI Benchmarking Toolbox (SMIDIBT) is an open-source, Arduino-based package designed to measure the point-to-point latencies incurred by several devices used in the generation of response-triggered auditory feedback. Experiment 1 showed that MIDI messages are sent and received within 1 ms (on average) in the absence of any external MIDI device. Latencies decreased when the baud rate increased above the MIDI protocol default (31,250 bps). Experiment 2 benchmarked the latencies introduced by different MIDI-USB and MIDI-PCI interfaces. MIDI-PCI was superior to MIDI-USB, primarily because MIDI-USB is subject to USB polling. Experiment 3 tested three MIDI percussion pads. Both the audio and MIDI message latencies were significantly greater than 1 ms for all devices, and there were significant differences between percussion pads and instrument patches. Experiment 4 benchmarked four MIDI sound modules. Audio latencies were significantly greater than 1 ms, and there were significant differences between sound modules and instrument patches. These experiments suggest that millisecond accuracy might not be achievable with MIDI devices. The SMIDIBT can be used to benchmark a range of MIDI devices, thus allowing researchers to make informed decisions when choosing testing materials and to arrive at an acceptable latency at their discretion.
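
A rough software analogue of the point-to-point measurement, assuming a physical MIDI loopback (OUT wired to IN) and the mido library; this times the whole chain rather than each device (the toolbox itself uses an Arduino for finer-grained timing), and the port names are hypothetical.

```python
import time
import mido

out = mido.open_output("loopback-out")     # hypothetical port names
inp = mido.open_input("loopback-in")

latencies_ms = []
for _ in range(100):
    t0 = time.perf_counter()
    out.send(mido.Message("note_on", note=60, velocity=64))
    inp.receive()                          # blocks until the message returns
    latencies_ms.append((time.perf_counter() - t0) * 1000)

print(f"round trip: mean {sum(latencies_ms) / len(latencies_ms):.2f} ms, "
      f"max {max(latencies_ms):.2f} ms")
```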


Assuntos
Percepção Auditiva , Pesquisa Comportamental/instrumentação , Retroalimentação Sensorial , Benchmarking , Dedos , Humanos , Percussão , Som
13.
Conscious Cogn; 46: 173-187, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27764684

ABSTRACT

Philosophers have proposed that when people coordinate their actions with others they may experience a sense of joint agency, or shared control over actions and their effects. However, little empirical work has investigated the sense of joint agency. In the current study, pairs coordinated their actions to produce tone sequences and then rated their sense of joint agency on a scale ranging from shared to independent control. People felt more shared than independent control overall, confirming that people experience joint agency during joint action. Furthermore, people felt stronger joint agency when they (a) produced sequences that required mutual coordination compared to sequences in which only one partner had to coordinate with the other, (b) held the role of follower compared to leader, and (c) were better coordinated with their partner. Thus, the strength of joint agency is influenced by the degree to which people mutually coordinate with each other's actions.


Subjects
Cooperative Behavior, Interpersonal Relations, Psychomotor Performance/physiology, Adult, Female, Humans, Male, Young Adult
14.
PLoS One; 11(2): e0149438, 2016.
Article in English | MEDLINE | ID: mdl-26907605

ABSTRACT

BACKGROUND: Ecstasy use has been associated with short-term and long-term memory deficits on a standard Word Learning Task (WLT). The clinical relevance of this has been debated and is currently unknown. The present study aimed to evaluate the clinical relevance of verbal memory impairment in Ecstasy users. To that end, clinically relevant memory impairment was defined as a decrement in memory performance exceeding 1.5 standard deviations from the mean score of the healthy control sample. The primary question was whether being an Ecstasy user (E-user) was predictive of clinically deficient memory performance compared to a healthy control group. METHODS: WLT data were pooled from four experimental MDMA studies that compared memory performance during placebo and MDMA intoxication. Control data were taken from healthy volunteers with no drug use history who completed the WLT as part of a placebo-controlled clinical trial. This resulted in a sample size of 65 E-users and 65 age- and gender-matched healthy drug-naïve controls. All participants were recruited by similar means and were tested at the same testing facilities using identical standard operating procedures. Data were analyzed using linear mixed-effects models, Bayes factors, and logistic regressions. RESULTS: Verbal memory performance of placebo-treated E-users did not differ from that of controls, and there was substantial evidence in favor of the null hypothesis. History of use was not predictive of memory impairment. During MDMA intoxication, E-users' verbal memory was impaired. CONCLUSION: The combination of the acute and long-term findings demonstrates that, while clinically relevant memory impairment is present during intoxication, it is absent during abstinence. This suggests that use of Ecstasy/MDMA does not lead to clinically deficient memory performance in the long term. It remains to be investigated whether the current findings apply to more complex cognitive measures in diverse 'user categories' using a combination of genetics, imaging techniques, and neuropsychological assessments.
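
A sketch of the analysis families named above (not the authors' exact models): a linear mixed-effects model with participant as a random effect, and a Bayes factor for the placebo E-user vs. control contrast. The data file and column names are hypothetical.

```python
import pandas as pd
import pingouin as pg
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("wlt_scores.csv")         # hypothetical pooled dataset

# Group (E-user/control) x condition (placebo/MDMA) effects on recall,
# with a random intercept per participant
m = smf.mixedlm("recall ~ group * condition", df, groups=df["subject"]).fit()
print(m.summary())

# Bayes factor for placebo-treated E-users vs. controls; a BF10 well below 1
# would favor the null, as reported
a = df.query("group == 'user' and condition == 'placebo'")["recall"]
b = df.query("group == 'control'")["recall"]
t, _ = stats.ttest_ind(a, b)
print("BF10:", pg.bayesfactor_ttest(t, nx=len(a), ny=len(b)))
```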


Subjects
Memory Disorders/chemically induced, N-Methyl-3,4-methylenedioxyamphetamine/toxicity, Adolescent, Adult, Case-Control Studies, Controlled Clinical Trials as Topic, Female, Humans, Male, Verbal Learning/drug effects, Young Adult
15.
Behav Res Methods; 48(4): 1591-1607, 2016 Dec.
Article in English | MEDLINE | ID: mdl-26542971

ABSTRACT

Timing abilities are often measured by having participants tap their finger along with a metronome and presenting tap-triggered auditory feedback. These experiments predominantly use electronic percussion pads combined with software (e.g., FTAP or Max/MSP) that records responses and delivers auditory feedback. However, these setups involve unknown latencies between tap onset and auditory feedback and can sometimes miss responses or record multiple, superfluous responses for a single tap. These issues may distort measurements of tapping performance or affect the performance of the individual. We present an alternative setup using an Arduino microcontroller that addresses these issues and delivers low-latency auditory feedback. We validated our setup by having participants (N = 6) tap on a force-sensitive resistor pad connected to the Arduino and on an electronic percussion pad with various levels of force and tempi. The Arduino delivered auditory feedback through a pulse-width modulation (PWM) pin connected to a headphone jack or a wave shield component. The Arduino's PWM (M = 0.6 ms, SD = 0.3) and wave shield (M = 2.6 ms, SD = 0.3) demonstrated significantly lower auditory feedback latencies than the percussion pad (M = 9.1 ms, SD = 2.0), FTAP (M = 14.6 ms, SD = 2.8), and Max/MSP (M = 15.8 ms, SD = 3.4). The PWM and wave shield latencies were also significantly less variable than those from FTAP and Max/MSP. The Arduino captured all responses and recorded fewer superfluous responses than the percussion pad, which missed progressively more taps at lower tapping forces; regardless of tapping force, the Arduino outperformed the percussion pad. Overall, the Arduino is a high-precision, low-latency, portable, and affordable tool for auditory experiments.
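
Once latency samples are collected, the reported comparisons reduce to tests on their means and variances; the sketch below simulates samples from the means and SDs quoted in the abstract purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
arduino_pwm = rng.normal(0.6, 0.3, 200)    # ms, parameters from the abstract
ftap = rng.normal(14.6, 2.8, 200)

print("means (ms):", arduino_pwm.mean().round(2), ftap.mean().round(2))
print("Welch t-test:", stats.ttest_ind(arduino_pwm, ftap, equal_var=False))
print("Levene (variability):", stats.levene(arduino_pwm, ftap))
```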


Assuntos
Pesquisa Comportamental/instrumentação , Retroalimentação Sensorial , Software , Adulto , Pesquisa Comportamental/métodos , Feminino , Dedos/fisiologia , Humanos , Masculino , Adulto Jovem
16.
PLoS One; 8(9): e75163, 2013.
Article in English | MEDLINE | ID: mdl-24086461

ABSTRACT

Implicit learning (IL) occurs unconsciously and without intention. Perceptual fluency is the ease of processing elicited by previous exposure to a stimulus. It has been assumed that perceptual fluency is associated with IL. However, the role of perceptual fluency following IL has not been investigated in temporal pattern learning. Two experiments by Schultz, Stevens, Keller, and Tillmann demonstrated the IL of auditory temporal patterns using a serial reaction-time task and a generation task based on the process dissociation procedure. The generation task demonstrated that learning was implicit in both experiments via motor fluency, that is, the inability to suppress learned information. With the aim of disentangling conscious and unconscious processes, we analyzed previously unreported recognition data from the Schultz et al. experiments using the sequence identification measurement model. The model assumes that perceptual fluency reflects unconscious processes and IL. For Experiment 1, the model indicated that conscious and unconscious processes contributed to recognition of temporal patterns, but that unconscious processes had a greater influence on recognition than conscious processes. For Experiment 2, the model indicated equal contributions of conscious and unconscious processes to the recognition of temporal patterns. As Schultz et al. demonstrated IL in both experiments using a generation task, and the conditions reported here in Experiments 1 and 2 were identical, two explanations are offered for the discrepancy between the model-based and behavioral results: 1) perceptual fluency may not be necessary to infer IL, or 2) conscious control over implicitly learned information may vary as a function of perceptual fluency and motor fluency.
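
For background, the classic process-dissociation estimates separate conscious (C) and unconscious (U) influences from inclusion and exclusion performance as C = I - E and U = E / (1 - C); the sequence identification measurement model used above is more elaborate, so this sketch shows only the underlying logic.

```python
def process_dissociation(inclusion, exclusion):
    """Classic Jacoby estimates from inclusion/exclusion proportions."""
    c = inclusion - exclusion
    u = exclusion / (1 - c) if c < 1 else float("nan")
    return c, u

print(process_dissociation(inclusion=0.70, exclusion=0.40))  # C = 0.30, U ≈ 0.57
```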


Subjects
Learning/physiology, Models, Psychological, Pattern Recognition, Physiological/physiology, Unconscious, Psychological, Acoustic Stimulation, Adult, Female, Humans, Male, Middle Aged, New South Wales
17.
Q J Exp Psychol (Hove); 66(2): 360-380, 2013.
Article in English | MEDLINE | ID: mdl-22943558

ABSTRACT

Implicit learning (IL) occurs unintentionally. IL of temporal patterns has received minimal attention, and results are mixed regarding whether IL of temporal patterns occurs in the absence of a concurrent ordinal pattern. Two experiments examined the IL of temporal patterns and the conditions under which IL is exhibited. Experiment 1 examined whether uncertainty of the upcoming stimulus identity obscures learning. Based on probabilistic uncertainty, it was hypothesized that stimulus-detection tasks are more sensitive to temporal learning than multiple-alternative forced-choice tasks because of response uncertainty in the latter. Results demonstrated IL of metrical patterns in the stimulus-detection but not the multiple-alternative task. Experiment 2 investigated whether properties of rhythm (i.e., meter) benefit IL using the stimulus-detection task. The metric binding hypothesis states that metrical frameworks guide attention to periodic points in time. Based on the metric binding hypothesis, it was hypothesized that metrical patterns are learned faster than nonmetrical patterns. Results demonstrated learning of metrical and nonmetrical patterns but metrical patterns were not learned more readily than nonmetrical patterns. However, abstraction of a metrical framework was still evident in the metrical condition. The present study shows IL of auditory temporal patterns in the absence of an ordinal pattern.


Subjects
Attention/physiology, Auditory Perception/physiology, Learning/physiology, Time Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Analysis of Variance, Awareness, Female, Humans, Male, Middle Aged, Probability, Reaction Time/physiology, Retention, Psychology, Serial Learning, Time Factors, Uncertainty, Young Adult